115 research outputs found

    A VLSI-oriented and power-efficient approach for dynamic texture recognition applied to smoke detection

    The recognition of dynamic textures is fundamental in processing image sequences, as they are very common in natural scenes. Computing the optic flow is the most popular method to detect, segment and analyse dynamic textures, and it is especially well suited for weak dynamic textures. For strong dynamic textures, however, it implies a heavy computational load and therefore significant energy consumption. In this paper, we propose a novel approach intended to be implemented by very low-power integrated vision devices. It is based on a simple and flexible computation at the focal plane implemented by power-efficient hardware. The first stages of the processing are dedicated to removing redundant spatial information in order to obtain a simplified representation of the original scene. This simplified representation can be used by subsequent digital processing stages to decide on the presence and evolution of a certain dynamic texture in the scene. As an application of the proposed approach, we present preliminary results of smoke detection for the development of a forest fire detection system based on a wireless vision sensor network.
    Junta de Andalucía (CICE) 2006-TIC-235
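    The "strong dynamic texture" cue above can be illustrated in software. A minimal sketch, not the chip's circuitry: assuming grayscale frames given as nested lists, a strong dynamic texture shows high per-pixel temporal variance, so thresholding that variance gives a crude presence decision. The function names, threshold and fraction are hypothetical choices for illustration.

```python
def temporal_activity(frames):
    """Per-pixel temporal variance over a short window of frames.

    frames: list of equally sized 2-D grids (lists of lists) of gray levels.
    Returns a grid of variances; high values mark dynamic-texture pixels.
    """
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [f[r][c] for f in frames]
            mean = sum(vals) / n
            out[r][c] = sum((v - mean) ** 2 for v in vals) / n
    return out

def is_dynamic(frames, threshold=25.0, fraction=0.1):
    """Flag the scene as containing a dynamic texture when a sufficient
    fraction of pixels exceeds the variance threshold."""
    act = temporal_activity(frames)
    active = sum(1 for row in act for v in row if v > threshold)
    total = len(act) * len(act[0])
    return active / total >= fraction
```

    A static scene yields zero variance everywhere and no alarm, while a flickering region (smoke, fire, water) trips the fraction test.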

    Dynamic Energy Return on Energy Investment (EROI) and material requirements in scenarios of global transition to renewable energies

    A novel methodology is developed to dynamically assess the energy and material investments required over time to achieve the transition from fossil fuels to renewable energy sources in the electricity sector. The obtained results indicate that a fast transition achieving a 100% renewable electric system globally by 2060, consistent with the Green Growth narrative, could decrease the EROI of the energy system from the current ~12:1 to ~3:1 by mid-century, stabilizing thereafter at ~5:1. These EROI levels are well below the thresholds identified in the literature as required to sustain complex industrial societies. Moreover, this transition could drive a substantial re-materialization of the economy, exacerbating future availability risks for some minerals. Hence, the results obtained call into question the consistency and viability of the Green Growth narrative.
    Ministerio de Economía, Industria y Competitividad (Project no. FJCI-2016-28833)
    MEDEAS project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 69128
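    The qualitative EROI dip during the build-out can be reproduced with toy arithmetic. This is a minimal sketch with illustrative numbers only, not the paper's model: blend fossil and renewable EROIs harmonically (invested energies add, delivered energies add) as the renewable share ramps linearly to 100% by 2060, with a lower renewable EROI while the fleet is still being built.

```python
def transition_eroi(years, eroi_fossil=12.0, eroi_renewable_build=3.0,
                    eroi_renewable_steady=5.0, build_end=2060):
    """Toy system-level EROI during an energy transition.

    The renewable share ramps linearly from 0% in 2020 to 100% at
    `build_end`. During the build-out the renewable fleet carries extra
    embodied-energy investment (lower EROI), which relaxes once
    construction stops. All numbers are illustrative, not the paper's.
    """
    out = {}
    for y in years:
        share = min(1.0, max(0.0, (y - 2020) / (build_end - 2020)))
        eroi_ren = eroi_renewable_build if y < build_end else eroi_renewable_steady
        # Energy-weighted harmonic mix: per unit delivered, invested
        # energies of the two subsystems simply add up.
        invested_per_unit = share / eroi_ren + (1 - share) / eroi_fossil
        out[y] = 1.0 / invested_per_unit
    return out
```

    With these placeholder values the system EROI starts at 12:1, sags below 5:1 mid-transition, and settles at 5:1 once the build-out ends, mirroring the dip-then-stabilize shape reported in the abstract.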

    ACE 16k based stand-alone system for real-time pre-processing tasks

    This paper describes the design of a programmable stand-alone system for real-time vision pre-processing tasks. The system's architecture has been implemented and tested using an ACE16k chip and a Xilinx xc4028xl FPGA. The ACE16k chip consists basically of an array of 128×128 identical mixed-signal processing units, locally interacting, which operate in accordance with single instruction multiple data (SIMD) computing architectures; it has been designed for high-speed image pre-processing tasks requiring moderate accuracy levels (7 bits). The input images are acquired using the optical input capabilities of the ACE16k chip and, after being processed according to a programmed algorithm, are displayed in real time on a TFT screen. The system is designed to store and run different algorithms and to allow changes and improvements. Its main board includes a digital core, implemented on a Xilinx 4028 Series FPGA, which comprises a custom programmable control unit, a digital monochrome PAL video generator and an image memory selector. Video SRAM chips are included to store and access images processed by the ACE16k. Two daughter boards hold the program SRAM, and a video DAC-mixer card is used to generate the composite analog video signal.
    European Commission IST2001-38097
    Ministerio de Ciencia y Tecnología TIC2003-09817-C02-01
    Office of Naval Research (USA) N00014021088

    Locust-inspired vision system on chip architecture for collision detection in automotive applications

    This paper describes a programmable digital computing architecture dedicated to processing information in accordance with the organization and operating principles of the four-layer neuron structure found in the visual system of locusts. This architecture takes advantage of the natural collision detection skills of locusts and is capable of processing images and ascertaining collision threats in real-time automotive scenarios. In addition to the locust-inspired features, the architecture embeds a Topological Feature Estimator module to identify and classify objects on a collision course.
    European Commission IST2001-38097
    Ministerio de Ciencia y Tecnología TIC2003-09817-C02-01

    Comparative study of speed estimators with highly noisy measurement signals for Wind Energy Generation Systems

    This paper presents a comparative study of several speed estimators used to implement a sensorless speed control loop in Wind Energy Generation Systems driven by power-factor-correction three-phase boost rectifiers. This rectifier topology reduces the low-frequency harmonic content of the generator currents; consequently, the generator power factor approaches unity and undesired vibrations of the mechanical system decrease. The compared techniques start from the measurement of electrical variables such as currents and voltages, which contain low-frequency harmonics of the fundamental frequency of the wind generator as well as switching-frequency components due to the boost rectifier. In this noisy environment, the performance of the following estimation techniques has been analyzed: synchronous reference frame phase-locked loop, speed reconstruction by measuring the dc current and voltage of the rectifier, and speed estimation by means of both an Extended Kalman Filter and a Linear Kalman Filter. © 2010 Elsevier Ltd.
    The first author thanks the Instituto Politecnico Nacional (IPN) for financing his stay at the Universidad Politecnica de Valencia (UPV). This work was supported by the Spanish Ministry of Science and Innovation under Grant ENE2009-13998-C02-02.
    Carranza Castillo, O.; Figueres Amorós, E.; Garcerá Sanfeliú, G.; González Morales, L. G. (2011). Comparative study of speed estimators with highly noisy measurement signals for Wind Energy Generation Systems. Applied Energy, 88(3), 805-813. https://doi.org/10.1016/j.apenergy.2010.07.039
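    As an illustration of the last technique, a scalar Linear Kalman Filter with a random-walk speed model can smooth a noisy speed measurement. The paper's filters model the full generator dynamics, so this is only a sketch of the idea; the process and measurement variances q and r, and the initial conditions, are made-up values.

```python
def kalman_speed(measurements, q=0.01, r=4.0, x0=0.0, p0=100.0):
    """Scalar linear Kalman filter for a speed signal.

    Model: random-walk speed x_k = x_{k-1} + w (variance q) observed as
    z_k = x_k + v (variance r). A large initial covariance p0 makes the
    filter trust the first measurements while it converges.
    Returns the list of filtered estimates, one per measurement.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the random walk only inflates the covariance.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

    Fed a measurement corrupted by zero-mean ripple, the estimate settles near the true speed while the raw signal keeps oscillating.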

    On-site forest fire smoke detection by low-power autonomous vision sensor

    Early detection plays a crucial role in preventing forest fires from spreading. Wireless vision sensor networks deployed throughout high-risk areas can perform fine-grained surveillance and thereby very early detection and precise location of forest fires. One of the fundamental requirements to be met at the network nodes is reliable low-power on-site image processing. It greatly simplifies the communication infrastructure of the network, as only alarm signals instead of complete images are transmitted, thus anticipating a very competitive cost. As a first approximation to fulfilling this requirement, this paper reports the results of field tests carried out in collaboration with the Andalusian Fire-Fighting Service (INFOCA). Two controlled burns of forest debris were carried out (www.youtube.com/user/vmoteProject). Smoke was successfully detected on-site by the EyeRIS™ v1.2, a general-purpose autonomous vision system built by AnaFocus Ltd., in which a vision algorithm was programmed. No false alarm was triggered despite the significant motion other than smoke present in the scene. Finally, as a further step, we describe preliminary laboratory results obtained from a prototype vision chip which implements, at very low energy cost, some image processing primitives oriented to environmental monitoring.
    Ministerio de Ciencia e Innovación 2006-TIC-2352, TEC2009-11812
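    A simple heuristic illustrates how smoke can be told apart from other motion in the scene: a smoke plume tends to grow in area over consecutive frames, whereas animals, vehicles or swaying vegetation do not grow persistently. The sketch below is purely illustrative and is not the EyeRIS algorithm, which the abstract does not spell out; names and thresholds are hypothetical.

```python
def smoke_candidate(motion_mask_history, min_growth_frames=3):
    """Return True when the active region grows persistently.

    motion_mask_history: list of binary 2-D masks (1 = moving pixel),
    one per frame. Counts the frames in which the active area increased
    and requires at least `min_growth_frames` of them, which favours an
    expanding plume over back-and-forth motion.
    """
    areas = [sum(sum(row) for row in mask) for mask in motion_mask_history]
    growth = sum(1 for a, b in zip(areas, areas[1:]) if b > a)
    return growth >= min_growth_frames
```

    An on-site implementation would feed this with focal-plane motion masks, so only the final boolean alarm needs to leave the node.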

    Focal-plane generation of multi-resolution and multi-scale image representation for low-power vision applications

    Early vision stages represent a considerably heavy computational load. A huge amount of data needs to be processed under strict timing and power requirements. Conventional architectures usually fail to meet the specifications in many application fields, especially when autonomous vision-enabled devices are to be implemented, as in lightweight UAVs, robotics or wireless sensor networks. A bioinspired architectural approach can be employed, consisting of a hierarchical division of the processing chain that conveys the highest computational demand to the focal plane. There, distributed processing elements, concurrent with the photosensitive devices, influence the image capture and generate a pre-processed representation of the scene in which only the information of interest for subsequent stages remains. These focal-plane operators are implemented by analog building blocks, which may individually be somewhat imprecise but as a whole render the appropriate image processing very efficiently. As a proof of concept, we have developed a 176×144-pixel smart CMOS imager that delivers lighter but enriched representations of the scene. Each pixel of the array contains a photosensor and some switches and weighted paths allowing reconfigurable resolution and spatial filtering. An energy-based image representation is also supported. These functionalities greatly simplify the operation of the subsequent digital processor implementing the high-level logic of the vision algorithm. The resulting figures, 5.6 mW @ 30 fps, permit the integration of the smart image sensor with a wireless interface module (Imote2 from Memsic Corp.) for the development of vision-enabled WSN applications.
    Junta de Andalucía 2006-TIC-2352
    Ministerio de Ciencia e Innovación TEC2009-11812
    Office of Naval Research (USA) N00014111031
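    The reconfigurable-resolution feature amounts to averaging pixel neighbourhoods. A software sketch of the equivalent operation (the chip does this in analog at the focal plane; the function names here are illustrative, not the chip's API):

```python
def downsample(img, block):
    """Average non-overlapping block×block neighbourhoods of the image.

    img: 2-D grid (list of lists) whose dimensions are multiples of
    `block`. Each output pixel is the mean of one neighbourhood, i.e. a
    lower-resolution rendition of the same scene.
    """
    rows, cols = len(img), len(img[0])
    out = []
    for r in range(0, rows, block):
        row = []
        for c in range(0, cols, block):
            vals = [img[i][j] for i in range(r, r + block)
                              for j in range(c, c + block)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def pyramid(img, levels):
    """Successive 2×2 averaging yields a multi-scale representation."""
    scales = [img]
    for _ in range(levels):
        scales.append(downsample(scales[-1], 2))
    return scales
```

    The digital processor can then run its high-level logic on whichever scale is light enough for the task at hand.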

    Insect-vision inspired collision warning vision processor for automobiles

    Vision is expected to play an important role in car safety enhancement. Imaging systems can be used to enlarge the vision field of the driver, for instance by capturing and displaying views of hidden areas around the car which the driver can analyze for safer decision-making. Vision systems go a step further: they can autonomously analyze the visual information, identify dangerous situations and prompt the delivery of warning signals, for instance in case of road lane departure, if an overtaking car is in the blind spot, or if an object is approaching on a collision course. Processing capabilities are also needed for applications viewing the car interior, such as "intelligent airbag systems" that base deployment decisions on passenger features. On-line processing of visual information for car safety involves multiple sensors and views, huge amounts of data per view and large frame rates. The associated computational load may be prohibitive for conventional processing architectures, so dedicated systems with embedded local processing capabilities may be needed to confront these challenges. This paper describes a dedicated sensory-processing architecture for collision warning which is inspired by insect vision. In particular, the paper relies on the exploitation of knowledge about the behavior of Locusta migratoria to develop dedicated chips and systems which are integrated into model cars as well as into a commercial car (Volvo XC90) and tested to deliver collision warnings in real traffic scenarios.
    Gobierno de España TEC2006-15722
    European Community IST2001-38097
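    The looming cue that the locust visual pathway responds to can be caricatured in a few lines: an object on a collision course subtends an angle that grows at an accelerating relative rate, while receding or laterally translating objects do not. This is a behavioural sketch only, with an arbitrary threshold, not the four-layer neuron model described in the paper.

```python
def looming_alarm(angular_sizes, growth_threshold=0.15):
    """Flag a collision threat from a sequence of angular sizes.

    angular_sizes: the apparent size of a tracked object on consecutive
    frames (any consistent unit). Triggers when the relative frame-to-
    frame growth exceeds `growth_threshold`, a crude proxy for the
    expanding retinal image of an approaching object.
    """
    for prev, cur in zip(angular_sizes, angular_sizes[1:]):
        if prev > 0 and (cur - prev) / prev > growth_threshold:
            return True
    return False
```

    The threshold trades false alarms from jitter against warning time; a real system would also gate on the object's position in the field of view.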

    ACE16K: The Third Generation of Mixed-Signal SIMD-CNN ACE Chips Toward VSoCs

    Today, with 0.18-μm technologies mature and stable enough for mixed-signal design, with a large variety of CMOS-compatible optical sensors available, and with 0.09-μm technologies knocking at the door of designers, we can face the design of integrated systems, instead of just integrated circuits. In fact, significant progress has been made in the last few years toward the realization of vision systems on chips (VSoCs). Such VSoCs are eventually targeted to integrate within a semiconductor substrate the functions of optical sensing, image processing in space and time, high-level processing, and the control of actuators. The consecutive generations of ACE chips define a roadmap toward flexible VSoCs. These chips consist of arrays of mixed-signal processing elements (PEs) which operate in accordance with single instruction multiple data (SIMD) computing architectures and exhibit the functional features of CNN Universal Machines. They have been conceived to cover the early stages of the visual processing path in a fully parallel manner, and hence more efficiently than DSP-based systems. Across the different generations, improvements and modifications have been made seeking to converge with the newest discoveries of neurobiologists regarding the behavior of natural retinas. This paper presents considerations pertaining to the design of a member of the third generation of ACE chips, the so-called ACE16k chip. This chip, designed in a 0.35-μm standard CMOS technology, contains about 3.75 million transistors and exhibits peak computing figures of 330 GOPS, 3.6 GOPS/mm² and 82.5 GOPS/W. Each PE in the array contains a reconfigurable computing kernel capable of calculating linear convolutions on 3×3 neighborhoods in less than 1.5 μs, imagewise Boolean combinations in less than 200 ns, imagewise arithmetic operations in about 5 μs, and CNN-like temporal evolutions with a time constant of about 0.5 μs. Unfortunately, the many ideas underlying the design of this chip cannot be covered in a single paper; hence, this paper focuses first on placing the ACE16k in the ACE chip roadmap and then on discussing the most significant modifications of ACE16k versus its predecessors in the family.
    LOCUST IST2001-38097
    VISTA TIC2003-09817-C02-01
    Office of Naval Research N00014021088
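    The per-PE convolution can be sketched in software to make the operation concrete. A minimal version under stated assumptions: zero padding at the borders and template-style (unflipped) indexing, as is customary for CNN templates; the chip computes this in analog, concurrently for all pixels.

```python
def conv3x3(img, kernel):
    """Apply a 3×3 template to every pixel of a 2-D image.

    img: 2-D grid (list of lists) of pixel values.
    kernel: 3×3 template; kernel[1][1] weights the centre pixel.
    Out-of-bounds neighbours contribute zero (zero padding).
    """
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        acc += kernel[dr + 1][dc + 1] * img[rr][cc]
            out[r][c] = acc
    return out
```

    On the chip this whole array-wide operation completes in under 1.5 μs; in software it is O(rows × cols) multiply-accumulates per template pass.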